
    Compression-aware Training of Deep Networks

    In recent years, great progress has been made in a variety of application domains thanks to the development of increasingly deep neural networks. Unfortunately, the huge number of units in these networks makes them expensive both computationally and memory-wise. To overcome this, several compression strategies have been proposed, exploiting the fact that deep networks are over-parametrized. These methods, however, typically start from a network that has been trained in a standard manner, without considering future compression. In this paper, we propose to explicitly account for compression in the training process. To this end, we introduce a regularizer that encourages the parameter matrix of each layer to have low rank during training. We show that accounting for compression during training allows us to learn much more compact, yet at least as effective, models than state-of-the-art compression techniques.
    Comment: Accepted at NIPS 201
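A common convex surrogate for a low-rank penalty of the kind described above is the nuclear norm (the sum of singular values). The following is a minimal sketch of that idea, not the paper's actual regularizer; the matrix shapes and the weight `lam` are illustrative choices:

```python
# Nuclear-norm penalty: the sum of singular values is a standard convex
# surrogate for matrix rank, so adding it to the training loss pushes a
# layer's weight matrix towards low rank. `lam` is an illustrative weight.
import numpy as np

def nuclear_norm_penalty(W, lam=0.01):
    """Return lam * (sum of singular values of W)."""
    s = np.linalg.svd(W, compute_uv=False)
    return lam * s.sum()

rng = np.random.default_rng(0)
W_low = rng.normal(size=(64, 2)) @ rng.normal(size=(2, 64))  # rank <= 2
W_full = rng.normal(size=(64, 64))                           # full rank
```

In a compression-aware setting, such a penalty would be added to the task loss during training, and small singular values could be truncated afterwards to obtain the compact model.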

    Deuterium-deuterium nuclear cross-sections in insulator and metallic environments

    The three-dimensional Thomas-Fermi (TF) model is used to simulate the variation of the d + d → t + p cross-section at low impact energies when the target deuterium nucleus is embedded in a metallic or insulating environment. Comparison of the computational results to recent experiments demonstrates that, even though the TF model can explain some of the increase in the low-energy cross-section for a metallic host, a full explanation of the experimental results is still lacking. Possible reasons for the disagreement are discussed.
    Comment: 6 pages, 6 figures. Accepted for publication in Eur. Phys. Jour.

    Bringing Background into the Foreground: Making All Classes Equal in Weakly-supervised Video Semantic Segmentation

    Pixel-level annotations are expensive and time-consuming to obtain. Hence, weak supervision using only image tags could have a significant impact on semantic segmentation. Recent years have seen great progress in weakly-supervised semantic segmentation, whether from a single image or from videos. However, most existing methods are designed to handle a single background class. In practical applications, such as autonomous navigation, it is often crucial to reason about multiple background classes. In this paper, we introduce an approach to doing so by making use of classifier heatmaps. We then develop a two-stream deep architecture that jointly leverages appearance and motion, and design a loss based on our heatmaps to train it. Our experiments demonstrate the benefits of our classifier heatmaps and of our two-stream architecture on challenging urban scene datasets and on the YouTube-Objects benchmark, where we obtain state-of-the-art results.
    Comment: 11 pages, 4 figures, 7 tables. Accepted in ICCV 201

    Highly efficient heavy-metal extraction from water with carboxylated graphene nanoflakes

    Heavy metals such as lead or cadmium have a wide range of detrimental and devastating effects on human health. It is therefore of paramount importance to efficiently remove heavy metals from industrial wastewater streams as well as from drinking water. Carbon materials, including graphene and graphene oxide (GO), have recently been advocated as efficient sorption materials for heavy metals. We show that highly carboxylated graphene nanoflakes (cx-GNF) outperform nano-graphene oxide (nGO) as well as traditional GO with respect to extracting Fe²⁺, Cu²⁺, Fe³⁺, Cd²⁺ and Pb²⁺ cations from water. The sorption capacity for Pb²⁺, for example, is more than six times greater for the cx-GNF than for GO, which is attributed to the efficient formation of lead carboxylates as well as strong cation-π interactions. The large number of carboxylic acid groups as well as the intact graphenic regions of the cx-GNF are therefore responsible for the strong binding of the heavy-metal cations. Remarkably, the performance of the as-made cx-GNF can easily compete with previously reported carbon materials that have undergone additional chemical-functionalisation procedures for the purpose of heavy-metal extraction. Furthermore, we demonstrate the recyclability of the cx-GNF material with respect to Pb²⁺ loading, as well as its outstanding performance for Pb²⁺ extraction in the presence of excess Ca²⁺ or Mg²⁺ cations, which are often present under environmental conditions. Of all the graphene materials, the cx-GNF therefore show the greatest potential for future application in heavy-metal extraction processes.

    A simple and mild chemical oxidation route to high-purity nano-graphene oxide

    Nano-graphene oxide (nGO) is used in a wide range of applications, including cellular imaging, drug delivery, desalination and energy storage. Current preparation protocols are similar to those for standard graphene oxide (GO) and typically rely on mixtures of sulfuric acid and potassium permanganate. We present a new route to nGO (∼30 nm diameter) using a quite defective arc-discharge carbon source and only 9 M nitric acid as the oxidising agent. The preparation can be scaled up proportionately with current GO protocols, with 50 mL of half-concentrated nitric acid able to process one gram of arc-discharge material. The workup is straightforward and involves neutralization with sodium hydroxide, which precipitates the sodium salt of nGO from solution. The only by-product of the new procedure is aqueous sodium nitrate, which makes this protocol the cleanest route yet to nGO. The presence and quantities of functional groups in our nGO are determined and compared with standard GO. We anticipate that this new route to nGO will foster a range of new applications. In particular, the presence of highly reactive carboxylic anhydride groups on our nGO material offers an excellent opportunity for purpose-specific chemical functionalization.

    Distribution-Matching Embedding for Visual Domain Adaptation

    Domain-invariant representations are key to addressing the domain-shift problem, where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain-specific. In this paper, we introduce a Distribution-Matching Embedding approach: an unsupervised domain adaptation method that overcomes this issue by mapping the data to a latent space where the distance between the empirical distributions of the source and target examples is minimized. In other words, we seek to extract the information that is invariant across the source and target data. In particular, we study two different distances to compare the source and target distributions: the Maximum Mean Discrepancy and the Hellinger distance. Furthermore, we show that our approach allows us to learn either a linear embedding, or a nonlinear one. We demonstrate the benefits of our approach on the tasks of visual object recognition, text categorization, and WiFi localization.
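The first of these two distances has a simple empirical form. Below is a minimal sketch of the (biased) squared Maximum Mean Discrepancy between two samples with a Gaussian kernel; the kernel bandwidth `gamma` and the sample shapes are illustrative choices, not values from the paper:

```python
# Biased empirical estimate of the squared Maximum Mean Discrepancy (MMD)
# between two samples X and Y, using a Gaussian (RBF) kernel.
import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    """Squared MMD: mean k(X,X) + mean k(Y,Y) - 2 * mean k(X,Y)."""
    def k(A, B):
        # Pairwise squared Euclidean distances, then the Gaussian kernel.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()
```

The estimate is zero when the two samples coincide and grows as the samples are drawn from increasingly different distributions, which is what makes it usable as a distribution-matching objective.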

    Tracking and Retexturing Cloth for Real-Time Virtual Clothing Applications

    In this paper, we describe a dynamic texture-overlay method from monocular images for the real-time visualization of garments in a virtual-mirror environment. Similar to looking into a mirror when trying on clothes, we create the same impression, but for virtually textured garments. The mirror is replaced by a large display that shows the mirrored image of a camera capturing, e.g., the upper body of a person. By estimating the elastic deformations of the cloth from a single camera in the 2D image plane and recovering the illumination of the textured surface of a shirt in real time, an arbitrary virtual texture can be realistically augmented onto the moving garment so that the person appears to wear the virtual clothing. The result is a combination of the real video and the newly augmented model, yielding a realistic impression of the virtual piece of cloth.

    Efficient Linear Programming for Dense CRFs

    The fully connected conditional random field (CRF) with Gaussian pairwise potentials has proven popular and effective for multi-class semantic segmentation. While the energy of a dense CRF can be minimized accurately using a linear programming (LP) relaxation, the state-of-the-art algorithm is too slow to be useful in practice. To alleviate this deficiency, we introduce an efficient LP minimization algorithm for dense CRFs. To this end, we develop a proximal minimization framework, in which the dual of each proximal problem is optimized via block coordinate descent. We show that each block of variables can be optimized efficiently. Specifically, for one block, the problem decomposes into significantly smaller subproblems, each of which is defined over a single pixel. For the other block, the problem is optimized via conditional gradient descent. This has two advantages: 1) the conditional gradient can be computed in time linear in the number of pixels and labels; and 2) the optimal step size can be computed analytically. Our experiments on standard datasets provide compelling evidence that our approach outperforms all existing baselines, including the previous LP-based approach for dense CRFs.
    Comment: 24 pages, 10 figures and 4 table
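The two properties highlighted for the conditional-gradient block (a cheap linear oracle and an analytically optimal step size) are easy to illustrate on a toy problem. The following is a generic Frank-Wolfe sketch for a quadratic objective over the probability simplex; the problem data are illustrative and unrelated to the CRF energy itself:

```python
# Frank-Wolfe (conditional gradient) over the probability simplex for the
# quadratic objective 0.5 * ||A x - b||^2. For a quadratic, the exact
# line-search step size along the Frank-Wolfe direction has a closed form.
import numpy as np

def frank_wolfe_simplex(A, b, iters=200):
    """Minimize 0.5 * ||A x - b||^2 over {x >= 0, sum(x) = 1}."""
    n = A.shape[1]
    x = np.full(n, 1.0 / n)                  # feasible start: uniform point
    for _ in range(iters):
        grad = A.T @ (A @ x - b)             # gradient of the quadratic
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0             # linear oracle: best simplex vertex
        d = s - x                            # Frank-Wolfe direction
        denom = np.dot(A @ d, A @ d)
        if denom <= 1e-12:
            break
        # Analytic (exact) line search for a quadratic, clipped to stay feasible.
        gamma = np.clip(-np.dot(grad, d) / denom, 0.0, 1.0)
        x += gamma * d
    return x
```

Because each iterate is a convex combination of the previous point and a vertex, feasibility is maintained for free, and no projection step is ever needed.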